
Softmax-free Transformer



Supplementary for SOFT: Softmax-free Transformer with Linear Complexity

Neural Information Processing Systems

According to the eigenfunction's definition, we can get: ∫ k(y, x) φ(x) p(x) dx = λ φ(y). In our formulation, instead of directly calculating the Gaussian kernel weights, they are approximated: the relation between any two tokens is reconstructed via sampled bottleneck tokens. However, it turns out to suffer from a similar failure. For each model, we show the output from the first two attention heads (top and bottom rows). Li Zhang (lizhangfd@fudan.edu.cn) is the corresponding author, with the School of Data Science, Fudan University.
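The bottleneck-token idea above is in the spirit of a Nyström low-rank approximation of the full kernel matrix. The sketch below is a generic Nyström reconstruction of a Gaussian kernel, not the paper's exact SOFT formulation; the uniform subsampling of bottleneck tokens and the kernel bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel weights between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_attention(X, m=4, gamma=1.0):
    """Nystrom-style sketch: the N x N token relation matrix is
    reconstructed as K_nm @ pinv(K_mm) @ K_nm.T from m sampled
    bottleneck tokens, so only N x m and m x m blocks are computed.
    Uniform subsampling of tokens is an illustrative assumption."""
    idx = np.linspace(0, len(X) - 1, m).astype(int)
    B = X[idx]                     # m bottleneck tokens
    K_nm = gaussian_kernel(X, B)   # (N, m)
    K_mm = gaussian_kernel(B, B)   # (m, m)
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))       # 8 tokens, 4-dim features
K_approx = nystrom_attention(X, m=4)  # (8, 8) low-rank reconstruction
```

With m = N the reconstruction recovers the full kernel exactly (up to numerical error), which is a quick sanity check on the approximation.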


SOFT: Softmax-free Transformer with Linear Complexity

Neural Information Processing Systems

Vision transformers (ViTs) have pushed the state-of-the-art for various visual recognition tasks by patch-wise image tokenization followed by self-attention. Various attempts at approximating the self-attention computation with linear complexity have been made in natural language processing. However, an in-depth analysis in this work shows that they are either theoretically flawed or empirically ineffective for visual recognition. We further identify that their limitations are rooted in keeping the softmax self-attention during approximations. Specifically, conventional self-attention is computed by normalizing the scaled dot-product between token feature vectors.
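To make the complexity argument concrete, here is a minimal NumPy sketch, not the paper's actual SOFT attention. It contrasts conventional softmax attention, which must materialize the N x N score matrix, with a softmax-free variant where the products can simply be re-associated to linear cost in N.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Conventional self-attention: softmax over the scaled dot-product.
    The N x N score matrix is formed explicitly, so compute and memory
    grow quadratically with the sequence length N."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                               # (N, N)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)              # row-wise softmax
    return weights @ V

def linearized_attention(Q, K, V):
    """Softmax-free sketch: with no row-wise softmax coupling Q and K,
    (Q K^T) V can be re-associated as Q (K^T V), costing O(N d^2)
    instead of O(N^2 d). The 1/sqrt(d) scaling is kept for parity."""
    d = Q.shape[-1]
    return (Q @ (K.T @ V)) / np.sqrt(d)

rng = np.random.default_rng(0)
N, d = 16, 8
Q, K, V = rng.standard_normal((3, N, d))
out = linearized_attention(Q, K, V)
print(out.shape)  # (16, 8)
```

The key point is that the re-association is only valid once the softmax is removed: the row-wise normalization is what forces the full N x N matrix to exist.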


Compute-Efficient Medical Image Classification with Softmax-Free Transformers and Sequence Normalization

Khader, Firas, Nahhas, Omar S. M. El, Han, Tianyu, Müller-Franzes, Gustav, Nebelung, Sven, Kather, Jakob Nikolas, Truhn, Daniel

arXiv.org Artificial Intelligence

The Transformer model has been pivotal in advancing fields such as natural language processing, speech recognition, and computer vision. However, a critical limitation of this model is its quadratic computational and memory complexity relative to the sequence length, which constrains its application to longer sequences. This is especially crucial in medical imaging, where high-resolution images can reach gigapixel scale. Efforts to address this issue have predominantly focused on complex techniques, such as decomposing the softmax operation integral to the Transformer's architecture. This paper addresses the quadratic computational complexity of Transformer models and introduces a remarkably simple and effective method that circumvents this issue by eliminating the softmax function from the attention mechanism and adopting a sequence normalization technique for the key, query, and value tokens. Coupled with a reordering of matrix multiplications, this approach reduces the memory and compute complexity to a linear scale. We evaluate this approach across various medical imaging datasets comprising fundoscopic, dermoscopic, radiologic and histologic imaging data. Our findings highlight that these models exhibit a comparable performance to traditional transformer models, while efficiently handling longer sequences.
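The recipe described above, normalize the query, key, and value tokens along the sequence axis, drop the softmax, and reorder the matrix multiplications, can be sketched as follows. The exact normalization in the paper may differ; standardizing each feature across the token axis is an assumption here, as is the helper name `seq_norm`.

```python
import numpy as np

def seq_norm(X, eps=1e-6):
    # Assumed sequence normalization: standardize each feature channel
    # across the sequence (token) axis. The paper's exact scheme may differ.
    mu = X.mean(axis=0, keepdims=True)
    sigma = X.std(axis=0, keepdims=True)
    return (X - mu) / (sigma + eps)

def softmax_free_attention(Q, K, V):
    """Softmax-free attention sketch: normalize Q, K, V over the sequence
    axis, then compute Q (K^T V). K^T V is only (d, d), so the cost is
    linear in the sequence length N rather than quadratic."""
    Qn, Kn, Vn = seq_norm(Q), seq_norm(K), seq_norm(V)
    return Qn @ (Kn.T @ Vn)

rng = np.random.default_rng(0)
N, d = 10, 4
Q, K, V = rng.standard_normal((3, N, d))
out = softmax_free_attention(Q, K, V)   # (10, 4), no N x N matrix formed
```

Because the (d, d) intermediate is independent of N, doubling the sequence length only doubles the work, which is what makes gigapixel-scale token counts tractable.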